Penalizing Small Errors Using an Adaptive Logarithmic Loss
Authors
Abstract
Loss functions are error metrics that quantify the difference between a prediction and its corresponding ground truth. Fundamentally, they define the functional landscape that is traversed by gradient descent. Although numerous loss functions have been proposed to date to handle various machine learning problems, little attention has been given to enhancing them so that this landscape is traversed more effectively. In this paper, we simultaneously and significantly mitigate two prominent problems in medical image segmentation, namely: i) class imbalance between foreground and background pixels, and ii) poor loss function convergence. To this end, we propose an Adaptive Logarithmic Loss (ALL) function. We compare it with existing state-of-the-art loss functions on the ISIC 2018 skin lesion dataset, a nuclei segmentation dataset, and the DRIVE retinal vessel dataset. We measure the performance of our methodology against these benchmarks and demonstrate competitive performance. More generally, we show that our system can be used as a framework for better training of deep neural networks.
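The abstract does not give the exact formulation of the ALL function, but the core idea it names (penalizing small errors by logarithmically rescaling a segmentation loss so gradients stay large near convergence) can be sketched as follows. This is a hypothetical illustration built on a soft Dice loss; the function names, the `gamma` exponent, and the log-scaling are assumptions, not the paper's actual definition:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    # Soft Dice loss: 1 - 2|P.G| / (|P| + |G|); robust to
    # foreground/background class imbalance because it is
    # computed over overlap rather than per-pixel counts.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def log_scaled_dice_loss(pred, target, gamma=0.3, eps=1e-7):
    # Hypothetical log-scaled variant: taking (-log(Dice coefficient))^gamma
    # steepens the loss surface near zero error, so small residual
    # mistakes are penalized more strongly late in training.
    d = dice_loss(pred, target, eps)
    return np.power(-np.log(np.clip(1.0 - d, eps, 1.0)), gamma)
```

As a sanity check, a perfect prediction yields a loss of zero, while any mismatch yields a positive value that grows faster (relative to plain Dice) as the error shrinks toward zero.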
Similar resources
Distributed adaptive sampling using bounded-errors
This paper presents a communication/coordination/processing architecture for distributed adaptive observation of a spatial field using a fleet of autonomous mobile sensors. One of the key difficulties in this context is to design scalable algorithms for incremental fusion of information across platforms robust to what is known as the “rumor problem”. Incremental fusion is in general based on a...
Variance Penalizing AdaBoost
This paper proposes a novel boosting algorithm called VadaBoost which is motivated by recent empirical Bernstein bounds. VadaBoost iteratively minimizes a cost function that balances the sample mean and the sample variance of the exponential loss. Each step of the proposed algorithm minimizes the cost efficiently by providing weighted data to a weak learner rather than requiring a brute force e...
Fractal dimension and logarithmic loss unpredictability
We show that the Hausdorff dimension equals the logarithmic loss unpredictability for any set of infinite sequences over a finite alphabet. Using computable, feasible, and finite-state predictors, this equivalence also holds for the computable, feasible, and finite-state dimensions. Combining this with recent results of Fortnow and Lutz (2002), we have a tight relationship between prediction wi...
An Adaptive Segmentation Method Using Fractal Dimension and Wavelet Transform
In analyzing a signal, especially a non-stationary signal, it is often necessary for the desired signal to be segmented into small epochs. Segmentation can be performed by splitting the signal at time instances where the signal amplitude or frequency changes. In this paper, the signal is initially decomposed into signals with different frequency bands using the wavelet transform. Then, fractal dimension of ...
Monetary Incentives in Speeded Perceptual Decision: Effects of Penalizing Errors Versus Slow Responses
The influence of monetary incentives on performance has been widely investigated across various disciplines. While the results reveal positive incentive effects only under specific conditions, the exact nature and contribution of mediating factors are largely unexplored. The present study examined influences of payoff schemes as one of these factors. In particular, we manipulated penalties ...
Journal
Journal title: Lecture Notes in Computer Science
Year: 2021
ISSN: ['1611-3349', '0302-9743']
DOI: https://doi.org/10.1007/978-3-030-68763-2_28